Add AOTAutogradCache to caching tutorials #3177

Merged 1 commit on Dec 6, 2024

Conversation

jamesjwu (Contributor) commented Dec 6, 2024

Add descriptions of new configuration settings exposed by AOTAutogradCache (pytorch/pytorch#141981).

cc @williamwen42 @msaroufim @anijain2305

pytorch-bot bot commented Dec 6, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3177

Note: Links to docs will display an error until the docs builds have completed.

✅ No Failures

As of commit e78f450 with merge base 25f1156:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

svekars requested review from oulgen and masnesral on Dec 6, 2024, 18:38
svekars added the torch.compile label (Torch compile and other relevant tutorials) on Dec 6, 2024
svekars merged commit e744f35 into pytorch:main on Dec 6, 2024
20 checks passed
@@ -23,7 +23,7 @@ Before starting this recipe, make sure that you have the following:
Inductor Cache Settings
----------------------------

Most of these caches are in-memory, only used within the same process, and are transparent to the user. An exception is the FX graph cache that stores compiled FX graphs. This cache allows Inductor to avoid recompilation across process boundaries when it encounters the same graph with the same Tensor input shapes (and the same configuration). The default implementation stores compiled artifacts in the system temp directory. An optional feature also supports sharing those artifacts within a cluster by storing them in a Redis database.
Most of these caches are in-memory, only used within the same process, and are transparent to the user. Exceptions are the caches that store compiled FX graphs (FXGraphCache and AOTAutogradCache). These caches allow Inductor to avoid recompilation across process boundaries when it encounters the same graph with the same Tensor input shapes (and the same configuration). The default implementation stores compiled artifacts in the system temp directory. An optional feature also supports sharing those artifacts within a cluster by storing them in a Redis database.

"An exception are" -> "An exception is" or "Exceptions are"
"tha" -> "that"

svekars added a commit that referenced this pull request Dec 6, 2024
Labels
cla signed torch.compile Torch compile and other relevant tutorials
5 participants